1,026 research outputs found
An analysis strategy for large fault trees
In recent years considerable progress has been made in improving the efficiency and accuracy of the fault tree methodology. The majority of fault trees produced to model industrial systems can now be analysed very quickly on a PC. However, there can still be problems with very large fault tree structures, such as those developed to model nuclear and aerospace systems. If the fault tree consists of a large number of basic events and gates, many of which are
repeated, possibly several times, within the structure, then processing the full problem may not be possible. In such circumstances the problem has to be reduced to a manageable size: the less significant failure modes are discarded in the qualitative evaluation to produce only the most relevant minimal cut sets, and approximations are used to obtain the top event probability or frequency.
The method proposed uses a combination of analysis options, each of which reduces the
complexity of the problem. A factorisation technique is applied first, designed to reduce the ‘noise’ in the tree structure: wherever possible, events which always appear together in the tree are combined to create more complex, higher-level events. A solution of the reduced problem can always be expanded back out in terms of the original events. The
second stage is to identify independent sections of the fault tree which can be analysed separately. Finally, the Binary Decision Diagram (BDD) technique is used to perform the quantification. Careful selection of the ordering applied to the basic events (variables) further aids the
efficiency of the process.
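The factorisation step described above can be sketched as follows. The dictionary representation of the tree and all event and gate names are assumptions for illustration; a full implementation would also have to check the gate type under which a group of events is combined before forming the complex event.

```python
from collections import defaultdict

def factorise(gates):
    """Merge basic events whose sets of parent gates are identical.

    `gates` maps gate name -> (gate type, list of children); any child
    that is not itself a gate name is treated as a basic event.
    """
    # Map each basic event to the set of gates it appears under.
    parents = defaultdict(set)
    for gate, (_, children) in gates.items():
        for child in children:
            if child not in gates:           # child is a basic event
                parents[child].add(gate)
    # Group events that share exactly the same set of parent gates,
    # i.e. events that always appear together in the tree.
    groups = defaultdict(list)
    for event, pset in parents.items():
        groups[frozenset(pset)].append(event)
    # Replace each group of size > 1 with a single complex event.
    mapping = {}
    for pset, events in groups.items():
        if len(events) > 1:
            complex_name = "C_" + "_".join(sorted(events))
            for gate in pset:
                gtype, children = gates[gate]
                kept = [c for c in children if c not in events]
                gates[gate] = (gtype, kept + [complex_name])
            mapping[complex_name] = sorted(events)
    return gates, mapping

# Illustrative tree: e2 and e3 always appear together under G1 and G2.
tree = {"TOP": ("OR", ["G1", "G2", "e1"]),
        "G1":  ("AND", ["e2", "e3"]),
        "G2":  ("AND", ["e2", "e3", "e4"])}
reduced, complexes = factorise(tree)
```

The returned `mapping` is what allows a solution of the reduced problem to be expanded back out in terms of the original events.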
A finite element analysis of bending stresses induced in external and internal involute spur gears
This paper describes the use of the finite element method for predicting the fillet stress distribution experienced by loaded spur gears. The location of the finite element model boundary and the element mesh density are investigated. Fillet stresses predicted by the finite element model are compared with the results of photoelastic experiments. Both external and internal spur gear tooth forms are considered.
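Fillet stresses predicted by finite element models of this kind are commonly benchmarked against the classical Lewis bending-stress estimate. The sketch below uses purely illustrative values for the load, face width, module and form factor; none are taken from the paper.

```python
def lewis_bending_stress(w_t, face_width, module, form_factor):
    """Nominal tooth-root bending stress: sigma = W_t / (b * m * Y)."""
    return w_t / (face_width * module * form_factor)

# All inputs are assumed example values.
sigma = lewis_bending_stress(w_t=2000.0,        # tangential tooth load, N
                             face_width=20.0,   # b, mm
                             module=4.0,        # m, mm
                             form_factor=0.32)  # Lewis form factor Y
# sigma is in N/mm^2 (MPa); an FE fillet stress would typically exceed
# this nominal value because of the stress concentration at the root.
```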
Identifying the major contributions to risk in phased missions
Many systems operate phased missions. A mission
consists of a number of consecutive phases, and the
functional requirement placed on the system changes from
phase to phase. A successful mission is the completion of each of the
consecutive phases. For non-repairable systems, efficient
analysis methods have recently been developed to predict the
mission unreliability. In the event that the predicted
performance falls below what is required, modifications
are made to improve the design. In conventional system
failure analysis, importance measures, which quantify the
contribution each component makes to the failure, can be used
to identify the weaknesses. Importance measures relevant to
phased mission applications are developed in this paper.
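The mission unreliability prediction mentioned above can be sketched under strong simplifying assumptions: every phase has series (all-components-required) logic, components fail independently with constant rates, and nothing is repaired. All durations, rates and requirement sets below are illustrative, not taken from the paper.

```python
import math

phase_durations = [10.0, 5.0, 20.0]               # hours
requires = [{"A", "B"}, {"B", "C"}, {"A", "C"}]   # components needed per phase
rate = {"A": 1e-3, "B": 2e-3, "C": 5e-4}          # failure rates per hour

# Cumulative end time of each phase.
ends = []
t = 0.0
for d in phase_durations:
    t += d
    ends.append(t)

def mission_reliability():
    # With series logic and no repair, each component simply has to
    # survive to the end of the last phase that needs it.
    r = 1.0
    for comp, lam in rate.items():
        t_need = max(e for e, req in zip(ends, requires) if comp in req)
        r *= math.exp(-lam * t_need)
    return r

mission_unreliability = 1.0 - mission_reliability()
```

General phase logic (parallel paths, differing fault trees per phase) needs the fault-tree-based methods the abstract refers to; this sketch covers only the simplest series case.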
Component contributions to the failure of systems undergoing phased missions
The way that many systems are utilised can be expressed in terms of missions which
are split into a sequence of contiguous phases. Mission success is achieved only if
each of the phases is successful; each phase must achieve a different
objective and may use different elements of the system.
The reliability analysis of a phased mission system will produce the probability of
failure during each of the phases together with the overall mission failure likelihood.
In the event that the system performance does not meet the acceptance
requirement, weaknesses in the design are identified and improvements made to
rectify the deficiencies. In conventional system assessments, importance measures
can be predicted which provide a numerical indicator of the significance of each
component in the system failure. Through the development of appropriate
importance measures this paper provides ways of identifying the contribution made
by each component failure to each phase failure and the overall mission failure. In
addition, a means to update the system performance prediction and the importance
measures as phases of the mission are successfully completed is given.
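The conventional importance measure that such phase-specific measures extend can be illustrated with Birnbaum's measure: the difference in system failure probability between the component failed and the component working. The example system (TOP = A OR (B AND C)) and the failure probabilities are assumptions for illustration.

```python
def system_unreliability(qa, qb, qc):
    # Q = P(A or (B and C)) with independent basic events.
    return qa + qb * qc - qa * qb * qc

def birnbaum(component, q):
    """I_B(i) = Q(system | i failed) - Q(system | i working)."""
    failed = dict(q, **{component: 1.0})
    working = dict(q, **{component: 0.0})
    args = lambda d: (d["A"], d["B"], d["C"])
    return system_unreliability(*args(failed)) - system_unreliability(*args(working))

q = {"A": 0.01, "B": 0.05, "C": 0.05}
# Rank components by their contribution to system failure.
ranking = sorted(q, key=lambda c: birnbaum(c, q), reverse=True)
```

Here A dominates because it fails the system on its own, whereas B and C matter only jointly; a phased-mission version of such a measure would be evaluated per phase.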
Identifying the smallest fault tree sections which contain dependencies
Since the early 1960s fault tree analysis has become the most frequently used technique to
quantify the likelihood of a particular system failure mode. One of the underlying
assumptions which justifies this approach is that the basic events are independent. However,
many systems feature component failure events for which the assumption of independence is
not valid. For example, standby dependency, maintenance dependency or sequential
dependency can be encountered in engineering systems. In such situations, Markov analysis
is required during the quantification process.
Since the efficiency of the Markov analysis largely depends on the size of the established
Markov model, it is most effective to apply the Markov method only to the smallest possible
fault tree sections containing dependencies. The remainder of the system assessment can be
performed by the application of conventional assessment techniques. The key to this
approach is to extract from the fault tree the smallest sections which contain dependencies.
This paper gives a brief introduction to the main existing dependency types and provides a
method aimed at establishing the smallest Markov model for the dependencies contained
within the fault tree.
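The kind of small Markov model that gets extracted for a dependency can be sketched with a cold-standby pair, where the standby's behaviour depends on the primary having failed, so the two failure events are not independent. The states, rates and numerical integration scheme below are illustrative assumptions.

```python
import math

# States: 0 = primary running, 1 = primary failed / standby running,
#         2 = system failed (absorbing).
LAM_P = 1e-3    # primary failure rate, per hour (assumed)
LAM_S = 2e-3    # standby failure rate once running (assumed)

def failed_probability(t, steps=100000):
    """Integrate the Markov equations dP/dt = P*Q with simple Euler steps."""
    p = [1.0, 0.0, 0.0]
    dt = t / steps
    for _ in range(steps):
        d0 = -LAM_P * p[0]
        d1 = LAM_P * p[0] - LAM_S * p[1]
        d2 = LAM_S * p[1]
        p = [p[0] + dt * d0, p[1] + dt * d1, p[2] + dt * d2]
    return p[2]

# Cross-check: with LAM_S = 2 * LAM_P, the analytic failure probability
# at t = 1/LAM_P works out to (1 - exp(-1))**2.
```

Because the state space of such a model grows combinatorially with the number of dependent components, restricting it to the smallest dependent section, as the paper proposes, is what keeps the analysis tractable.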
Qualitative analysis of complex, modularised fault trees using binary decision diagrams.
Fault Tree Analysis is commonly used in the reliability assessment of industrial
systems. However, when complex systems are studied conventional methods can
become computationally intensive and require the use of approximations. This leads
to inaccuracies in evaluating system reliability. To overcome such disadvantages, the
Binary Decision Diagram (BDD) method has been developed. This method improves
accuracy and efficiency, because the exact solutions can be calculated without the
requirement to calculate minimal cut sets as an intermediate phase. Minimal cut sets
can be obtained if needed.
BDDs are already proving to be of considerable use in system reliability analysis.
However, the difficulty is with the conversion process of the fault tree to the BDD.
The ordering of the basic events can have a crucial effect on the size of the final
BDD, and previous research has failed to identify an optimum scheme for producing
BDDs for all fault trees. This paper presents an extended strategy for the analysis of
complex fault trees. The method utilises simplification rules, which are applied to the
fault tree to reduce it to a series of smaller subtrees, whose solution is equivalent to
the original fault tree. The smaller subtree units are less sensitive to the basic event
ordering during BDD conversion. BDDs are constructed for every subtree. Qualitative
analysis is performed on the set of BDDs to obtain the minimal cut sets for the
original top event. It is shown how to extract the minimal cut sets from complex and
modular events in order to obtain the minimal cut sets of the original fault tree in
terms of basic events.
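The conversion and cut set extraction described above can be sketched as follows. This is a deliberately simplified structural recursion; real implementations use Rauzy's ite algorithm with a unique node table and dedicated minimisation, and the variable ordering, event names and example tree are all assumptions.

```python
ORDER = {"a": 0, "b": 1, "c": 2}    # assumed basic-event ordering

def var(name):
    # A single-variable BDD node: ("var", name, high branch, low branch).
    return ("var", name, True, False)

def top_index(n):
    return float("inf") if isinstance(n, bool) else ORDER[n[1]]

def apply_op(op, f, g):
    """Combine two BDDs with an AND or OR gate."""
    if isinstance(f, bool):
        f, g = g, f                  # keep any terminal in g
    if isinstance(g, bool):
        if op == "AND":
            return f if g else False
        return True if g else f      # OR
    t = min(top_index(f), top_index(g))
    fh, fl = (f[2], f[3]) if top_index(f) == t else (f, f)
    gh, gl = (g[2], g[3]) if top_index(g) == t else (g, g)
    name = f[1] if top_index(f) == t else g[1]
    high, low = apply_op(op, fh, gh), apply_op(op, fl, gl)
    return high if high == low else ("var", name, high, low)

def cut_sets(node, path=()):
    # Every path to the 1-terminal, recording variables taken on the
    # high (failed) branch, yields a cut set.
    if node is True:
        return [set(path)]
    if node is False:
        return []
    _, name, high, low = node
    return cut_sets(high, path + (name,)) + cut_sets(low, path)

def minimal(sets):
    return [s for s in sets if not any(o < s for o in sets)]

# Example fault tree: TOP = a OR (b AND c)
top_bdd = apply_op("OR", var("a"), apply_op("AND", var("b"), var("c")))
mcs = minimal(cut_sets(top_bdd))
```

For modularised trees, the same extraction is run on each subtree's BDD and the complex and modular events are then expanded back into basic events, as the paper describes.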
Efficient basic event orderings for binary decision diagrams
Over the last five years significant advances have been
made in methodologies to analyse the fault tree diagram.
The most successful of these developments has been the
Binary Decision Diagram (BDD) approach. The Binary
Decision Diagram approach has been shown to improve
both the efficiency of determining the minimal cut sets of
the fault tree and also the accuracy of the calculation
procedure used to determine the top event parameters. The
BDD technique provides a potential alternative to the
traditional approaches based on Kinetic Tree Theory.
To utilise the Binary Decision Diagram approach the
fault tree structure is first converted to the BDD format.
This conversion can be accomplished efficiently but
requires the basic events in the fault tree to be placed in an
ordering. A poor ordering can result in a Binary Decision
Diagram which is not an efficient representation of the fault
tree logic structure. The advantages to be gained by
utilising the BDD technique rely on the efficiency of the
ordering scheme. Alternative ordering schemes have been
investigated and no one scheme is appropriate for every
tree structure. Research to date has not found any rule
based means of determining the best way of ordering basic
events for a given fault tree structure.
The work presented in this paper takes a machine
learning approach based on Genetic Algorithms to select
the most appropriate ordering scheme. Features which
describe a fault tree structure have been identified and
these provide the inputs to the machine learning algorithm.
A set of possible ordering schemes has been selected based
on previous heuristic work. The objective of the work
detailed in the paper is to predict the most efficient of the
possible ordering alternatives from parameters which
describe a fault tree structure.
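The sensitivity to ordering that motivates this work can be demonstrated directly: the same function built under two different variable orderings yields reduced BDDs of different sizes. The example function and orderings are standard illustrations, not taken from the paper.

```python
def f(v):
    # Example failure logic: TOP = (a1 AND b1) OR (a2 AND b2).
    return (v["a1"] and v["b1"]) or (v["a2"] and v["b2"])

def reduced_bdd_size(order):
    """Count nodes of the reduced BDD built by Shannon expansion.

    Exponential-time for illustration only; real tools build the BDD
    directly from the fault tree without enumerating assignments.
    """
    nodes = {}                        # unique table: (var, high, low) -> id
    def build(i, assign):
        if i == len(order):
            return f(assign)          # terminal: True or False
        high = build(i + 1, dict(assign, **{order[i]: True}))
        low = build(i + 1, dict(assign, **{order[i]: False}))
        if high == low:
            return high               # redundant node: skip it
        # Node ids start at 2 so they never compare equal to True/False.
        return nodes.setdefault((order[i], high, low), len(nodes) + 2)
    build(0, {})
    return len(nodes)

good = reduced_bdd_size(["a1", "b1", "a2", "b2"])   # paired variables adjacent
bad = reduced_bdd_size(["a1", "a2", "b1", "b2"])    # pairs separated
```

Keeping each AND-pair adjacent in the ordering gives the smaller diagram; for larger trees the gap between orderings can be exponential, which is why predicting a good scheme from fault tree features is worthwhile.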
Using statistically designed experiments for safety system optimization
This paper describes the method of statistically designed experiments (SDEs), used as a
structured method to investigate the best setting for a number of decision variables in a
system design problem. Traditionally, in the design of safety critical systems, a trial and
error type approach is undertaken to achieve a final system that meets the design
objectives. This approach can be time consuming and often only an adequate design is
found rather than the optimal design for the available resources. Optimal use of
resources should be imperative when lives are potentially at risk. To demonstrate the
practicality of this new structured approach for optimising a safety system design, a high
integrity safety system has been used. Each design is analysed using the Binary Decision
Diagram analysis technique to establish the system unavailability, which is penalised if
the system constraints are exceeded. System constraints indicate the limitations on the resources which can be utilised. The SDE approach highlights good and bad settings for
possible design variables. This knowledge can then be used by more sophisticated search
techniques. The latter part of this paper analyses the results from the best design
generated using the SDE, for further optimisation using localised optimisation
approaches.
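The structure of such an experiment can be sketched with a two-level full factorial over three design variables and a penalised objective. The unavailability model, costs and budget below are toy assumptions standing in for the BDD-evaluated system unavailability the paper uses.

```python
from itertools import product

def unavailability(n_valves, test_interval, redundancy):
    # Toy response surface: more hardware and more frequent testing lower Q.
    return 0.01 / (n_valves * redundancy) * (test_interval / 26.0)

def objective(nv, ti, r, budget=100.0):
    cost = 30.0 * nv * r                        # assumed hardware cost
    penalty = 1.0 if cost > budget else 0.0     # penalise constraint violation
    return unavailability(nv, ti, r) + penalty

levels = [("n_valves", [1, 2]),
          ("test_interval", [13, 26]),          # weeks between tests
          ("redundancy", [1, 2])]
designs = list(product(*[vals for _, vals in levels]))
results = {d: objective(*d) for d in designs}

def main_effect(idx):
    # Mean response at the variable's high level minus its low level.
    lo, hi = levels[idx][1]
    mean = lambda lvl: sum(r for d, r in results.items() if d[idx] == lvl) / 4
    return mean(hi) - mean(lo)

best = min(results, key=results.get)
```

Note that the main effect for redundancy comes out positive here purely because doubling everything breaks the budget constraint; spotting such good and bad settings is exactly the screening role the SDE plays before the localised search takes over.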
A binary decision diagram method for phased mission analysis of non-repairable systems
Phased mission analysis is carried out to predict the reliability of systems which
undergo a series of phases, each with differing requirements for success, with the mission
objective being achieved only on the successful completion of all phases. Many systems from
a range of industries experience such missions. The methods used for phased mission
analysis are dependent upon the repairability of the system during the phases. If the system
is non-repairable, fault-tree-based methods offer an efficient solution. For repairable systems,
Markov approaches can be used.
This paper is concerned with the analysis of non-repairable systems. When the
phased mission failure causes are represented using fault trees, it is shown that the binary
decision diagram (BDD) method of analysis offers advantages in the solution process.
A new way in which BDD models can be efficiently developed for phased mission analysis
is proposed. The paper presents a methodology by which the phased mission models can
be developed and analysed to produce the phase failure modes and the phase failure
likelihoods.
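One identity that non-repairability makes available can be sketched directly: because a failed system stays failed, the probability of the mission failing *during* phase i is the cumulative system failure probability by the end of phase i minus that by the end of phase i-1 (when the failure logic is unchanged across phases). The two-component parallel system, rates and phase times are assumptions for illustration.

```python
import math

RATE = {"A": 1e-3, "B": 2e-3}       # assumed failure rates, per hour
phase_ends = [10.0, 25.0, 50.0]     # assumed cumulative phase end times

def q_system(t):
    # Parallel pair: the system has failed by t only if both A and B have.
    qa = 1.0 - math.exp(-RATE["A"] * t)
    qb = 1.0 - math.exp(-RATE["B"] * t)
    return qa * qb

prev = 0.0
phase_failure = []                  # probability of failing in each phase
for t in phase_ends:
    q_now = q_system(t)
    phase_failure.append(q_now - prev)
    prev = q_now
```

With differing failure logic per phase this subtraction is applied to combined multi-phase fault trees rather than a single q_system, which is where the BDD representation the paper develops earns its keep.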
Analysis of fault trees with secondary failures
The Fault Tree methodology is appropriate when the component level failures (basic
events) occur independently. One situation where the conditions of independence are
not met occurs when secondary failure events appear in the fault tree structure.
Guidelines for fault tree construction, which have been utilised for many years,
encourage the inclusion of secondary failures along with primary failures and
command faults in the representation of the failure logic. The resulting fault tree is an
accurate representation of the logic but may produce inaccurate quantitative results
for the probability and frequency of system failure if methodologies are used which
rely on independence.
This paper illustrates how inaccurate these quantitative results can be. Alternative
approaches are developed by which fault trees of this type of structure can be
analysed.
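The scale of the inaccuracy can be illustrated with a shared secondary cause. Suppose components A and B each fail either from an independent primary fault or from a common secondary event S; treating the two component failure events as independent then badly understates the probability of both failing. All probabilities below are assumed example values.

```python
qa_p, qb_p, qs = 0.01, 0.01, 0.005   # primary fault and shared-cause probabilities

def p_fail(q_primary):
    # A component fails if its primary fault occurs or the shared cause does.
    return 1.0 - (1.0 - q_primary) * (1.0 - qs)

# Exact: both fail iff the shared cause occurs, or both primaries do.
p_both_exact = qs + (1.0 - qs) * qa_p * qb_p

# Independence assumption: simply multiply the two marginal probabilities.
p_both_indep = p_fail(qa_p) * p_fail(qb_p)
```

With these numbers the independence assumption underestimates the probability of both components failing by more than an order of magnitude, which is the kind of error the alternative approaches in the paper are designed to avoid.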
- …